
    Convex Stochastic Fluid Programs with Average Cost


    Visualization of the homogeneous charge compression ignition/controlled autoignition combustion process using two-dimensional planar laser-induced fluorescence imaging of formaldehyde

    The paper reports an investigation into the HCCI/CAI combustion process using the two-dimensional PLIF technique. The PLIF of formaldehyde formed during the low-temperature reactions of HCCI/CAI combustion was excited by a tunable dye laser at 355 nm wavelength and detected by a gated ICCD camera. The times and locations of the two-stage autoignition of HCCI/CAI combustion were observed in a single-cylinder optical engine for several fuel blends of n-heptane and iso-octane. The results show that, when pure n-heptane was used, the initial formation of formaldehyde and its subsequent burning were closely related to the start of the low-temperature heat-release stage and the start of the main heat-release stage of HCCI combustion, respectively. Meanwhile, it was found that the formation of formaldehyde was affected more by the charge temperature than by the fuel concentration, but its subsequent burning, i.e. the start of the main heat-release combustion, took place in those areas where both the fuel concentration and the charge temperature were sufficiently high. As a result, the presence of stratified residual gases was found to affect both the spatial location and the timing of autoignition in an HCCI/CAI combustion engine. All studied fuels were found to have formaldehyde formation timings similar to that of n-heptane, which means that the presence of iso-octane did not appreciably affect the start of the low-temperature reactions. However, the heat release during the low-temperature reactions was significantly reduced by the presence of iso-octane in the studied fuels. In addition, the presence of iso-octane retarded the start of the main combustion stage.

    Markov Decision Processes

    The theory of Markov Decision Processes is the theory of controlled Markov chains. Its origins can be traced back to R. Bellman and L. Shapley in the 1950s. Over the following decades this theory has grown dramatically. It has found applications in various areas, e.g. computer science, engineering, operations research, biology and economics. In this article we give a short introduction to parts of this theory. We treat Markov Decision Processes with finite and infinite time horizon, where we restrict the presentation to the so-called (generalized) negative case. Solution algorithms like Howard's policy improvement and linear programming are also explained. Various examples show the application of the theory. We treat stochastic linear-quadratic control problems, bandit problems and dividend pay-out problems.
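    Howard's policy improvement, mentioned in the abstract above, alternates policy evaluation with a greedy one-step improvement. A minimal sketch for a finite discounted MDP is below; the two-state, two-action model is a made-up illustration, not an example from the article.

    ```python
    import numpy as np

    # transitions[a][s, s'] = P(s' | s, a); rewards[a][s] = expected reward
    # for action a in state s. This tiny MDP is purely illustrative.
    transitions = {
        0: np.array([[0.9, 0.1], [0.4, 0.6]]),
        1: np.array([[0.2, 0.8], [0.7, 0.3]]),
    }
    rewards = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 2.0])}
    beta = 0.9  # discount factor

    def policy_iteration(transitions, rewards, beta, n_states=2):
        policy = np.zeros(n_states, dtype=int)
        while True:
            # Policy evaluation: solve the linear system (I - beta * P_pi) v = r_pi
            P = np.array([transitions[policy[s]][s] for s in range(n_states)])
            r = np.array([rewards[policy[s]][s] for s in range(n_states)])
            v = np.linalg.solve(np.eye(n_states) - beta * P, r)
            # Policy improvement: greedy one-step lookahead on the current value
            q = np.array([[rewards[a][s] + beta * transitions[a][s] @ v
                           for a in transitions] for s in range(n_states)])
            new_policy = q.argmax(axis=1)
            if np.array_equal(new_policy, policy):
                return policy, v   # no improvement possible: policy is optimal
            policy = new_policy
    ```

    The stopping rule mirrors the statement in the abstract: if the greedy step yields no change, the current policy is already optimal.
    
    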

    Markov Decision Processes with Average-Value-at-Risk criteria

    We investigate the problem of minimizing the Average-Value-at-Risk (AVaR_r) of the discounted cost, over a finite and an infinite horizon, which is generated by a Markov Decision Process (MDP). We show that this problem can be reduced to an ordinary MDP with an extended state space and give conditions under which an optimal policy exists. We also give a time-consistent interpretation of the AVaR_r. At the end we consider a numerical example, a simple repeated casino game, which is used to discuss the influence of the risk aversion parameter r of the AVaR_r criterion.
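    The reduction to an extended-state MDP is the paper's contribution; the criterion itself can be sketched with the standard Rockafellar–Uryasev representation AVaR_r(X) = min_z { z + E[(X − z)^+] / (1 − r) }. The function name and the two-point cost distribution below are illustrative assumptions, not from the paper.

    ```python
    import numpy as np

    def avar(costs, probs, r):
        """AVaR at level r (0 <= r < 1) of a cost equal to costs[i]
        with probability probs[i], via min_z { z + E[(X - z)^+]/(1 - r) }."""
        costs = np.asarray(costs, dtype=float)
        probs = np.asarray(probs, dtype=float)
        # The minimiser z* is a quantile of the distribution, so it
        # suffices to scan the support points.
        return min(z + np.maximum(costs - z, 0.0) @ probs / (1.0 - r)
                   for z in costs)
    ```

    For r = 0 the criterion reduces to the expected cost, and as r increases it weights the upper tail more heavily, which is the risk-aversion effect discussed in the casino example.
    
    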

    Dependence Properties of exit times with applications to risk management

    We investigate the dependence structure of d-dimensional Itô processes which are not necessarily time-homogeneous. Sufficient conditions are given which imply that the processes are associated, i.e. show a certain kind of positive dependence. We also prove that associated processes have associated hitting times. Some applications in risk management are given.
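    For reference, the "certain kind of positive dependence" is association in the standard sense going back to Esary, Proschan and Walkup; this definition is general background, not a statement taken from the article itself:

    ```latex
    \text{A random vector } X=(X_1,\dots,X_d) \text{ is \emph{associated} if }
    \operatorname{Cov}\!\bigl(f(X),\,g(X)\bigr)\ \ge\ 0
    \text{ for all componentwise nondecreasing } f,g\colon\mathbb{R}^d\to\mathbb{R}
    \text{ for which the covariance exists; a process } (X_t)
    \text{ is associated if every finite-dimensional vector }
    (X_{t_1},\dots,X_{t_n}) \text{ is associated.}
    ```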

    Enhanced self-field critical current density of nano-composite YBa(2)Cu(3)O(7) thin films grown by pulsed-laser deposition

    This is the author's accepted manuscript. The final published article is available from the link below. Copyright @ EPLA, 2008. Enhanced self-field critical current density Jc of novel, high-temperature superconducting thin films is reported. Layers are deposited on (001) MgO substrates by laser ablation of YBa2Cu3O7−δ (Y-123) ceramics containing Y2Ba4CuMOx (M-2411, M = Ag, Nb, Ru, Zr) nano-particles. The Jc of the films depends on the secondary-phase content of the ceramic targets, which was varied between 0 and 15 mol%. Composite layers (2 mol% of Ag-2411 and Nb-2411) exhibit Jc values at 77 K of up to 5.1 MA/cm2, which is 3 to 4 times higher than those observed in films deposited from phase-pure Y-123 ceramics. Nb-2411 grows epitaxially in the composite layers and the estimated crystallite size is ~10 nm. Funding: the Austrian Science Fund, the Austrian Federal Ministry of Economics and Labour, the European Science Foundation and the Higher Education Commission of Pakistan.

    Discounted Stochastic Fluid Programs


    Routing of airplanes to two runways: monotonicity of optimal controls

    We consider the problem of routing incoming airplanes to two runways of an airport. Due to air turbulence, the necessary separation time between two successive landing operations depends on the types of the airplanes. Viewed as a queueing problem, this means that we have dependent service times. The aim is to minimise the waiting times of the aircraft. We consider here a model where arrivals form a stochastic process and where the decision maker does not know anything about future arrivals. We formulate this as a problem of stochastic dynamic programming and investigate the monotonicity of optimal routing strategies with respect to, e.g., the workload of the runways. We show that an optimal strategy is monotone (i.e. of switching type) only in a restricted case where decisions depend on the state of the runways only and not on the type of the arriving aircraft. Surprisingly, in the more realistic case where this type is also known to the decision maker, monotonicity need not hold.
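    The model can be sketched as follows: two runways, type-dependent separation times, and a simple greedy "land where you can land earliest" rule. The function name, arrival stream and separation table are made-up illustrations; the paper analyses optimal policies, not this greedy heuristic.

    ```python
    def simulate(arrivals, sep):
        """arrivals: list of (arrival_time, plane_type), sorted by time.
        sep[(prev, cur)]: separation required on a runway between landing
        a plane of type prev and then one of type cur (dependent service
        times, as in the abstract).
        Returns the total waiting time under the greedy routing rule."""
        last_land = [None, None]   # time of the last landing per runway
        last_type = [None, None]   # type of the last plane per runway
        total_wait = 0.0
        for t, typ in arrivals:
            def earliest(i):
                if last_type[i] is None:
                    return t       # runway not yet used: land immediately
                return max(t, last_land[i] + sep[(last_type[i], typ)])
            # greedy rule: pick the runway with the earliest landing slot
            land, i = min((earliest(i), i) for i in range(2))
            total_wait += land - t
            last_land[i], last_type[i] = land, typ
        return total_wait
    ```

    Note that the greedy rule here uses both the runway states and the arriving plane's type; the abstract's point is that in exactly this information setting an optimal policy need not be of switching type.
    
    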

    Optimal control of piecewise deterministic Markov processes with finite time horizon

    In this paper we study controlled Piecewise Deterministic Markov Processes with finite time horizon and unbounded rewards. Using an embedding procedure, we reduce these problems to discrete-time Markov Decision Processes. Under some continuity and compactness conditions we establish the existence of an optimal policy and show that the value function is the unique solution of the Bellman equation. Remarkably, this statement is true for unbounded rewards and without any contraction assumptions. Further conditions imply the existence of optimal nonrelaxed controls. We highlight our findings with two examples from financial mathematics.

    Control improvement for jump-diffusion processes with applications to finance

    We consider stochastic control problems with jump-diffusion processes and formulate an algorithm which produces, starting from a given admissible control π, a new control with a better value. If no improvement is possible, then π is optimal. Such an algorithm is well known for discrete-time Markov Decision Problems under the name of Howard’s policy improvement algorithm. The idea can be traced back to Bellman. Here we show, with the help of martingale techniques, that such an algorithm can also be formulated for stochastic control problems with jump-diffusion processes. As an application we derive some interesting results in portfolio optimization.
